Results 1 - 17 of 17
1.
Sci Rep ; 14(1): 2469, 2024 01 30.
Article in English | MEDLINE | ID: mdl-38291126

ABSTRACT

Sound localization is essential for perceiving the surrounding world and interacting with objects. This ability can be learned over time, and multisensory and motor cues play a crucial role in the learning process. A recent study demonstrated that, when training localization skills, reaching to the sound source to indicate its position reduced localization errors faster and to a greater extent than merely naming the source's position, even though in both tasks participants received the same feedback about the correct position of the sound source after a wrong response. However, it remains to be established which features made reaching to sounds more effective than naming. In the present study, we introduced a further condition in which the hand is the effector providing the response, but without reaching toward the space occupied by the target source: the pointing condition. We tested three groups of participants (naming, pointing, and reaching), each performing a sound localization task in normal and altered listening situations (i.e., mild-moderate unilateral hearing loss) simulated through auditory virtual reality technology. The experiment comprised four blocks: during the first and last blocks, participants were tested in the normal listening condition, and during the second and third in the altered listening condition. We measured their performance, their subjective judgments (e.g., effort), and their head-related behavior (through kinematic tracking). First, performance decreased when participants were exposed to asymmetrical mild-moderate hearing impairment, most markedly on the ipsilateral side and for the pointing group. Second, all groups reduced their localization errors across the altered listening blocks, but this reduction was larger for the reaching and pointing groups than for the naming group. Crucially, the reaching group showed greater error reduction on the side where the listening alteration was applied. Furthermore, across blocks, the reaching and pointing groups increased their head motor behavior during the task (i.e., they increased approaching head movements toward the space of the sound) more than the naming group. Third, while performance in the unaltered blocks (first and last) was comparable, only the reaching group continued to exhibit head behavior similar to that developed during the altered blocks (second and third), corroborating the previously observed relationship between reaching to sounds and head movements. In conclusion, this study further demonstrates the effectiveness of reaching to sounds, as compared to pointing and naming, in the learning process. This effect could relate both to the implementation of goal-directed motor actions and to the role of reaching actions in fostering head-related motor strategies.


Subjects
Hearing Loss, Sound Localization, Virtual Reality, Humans, Hearing/physiology, Sound Localization/physiology, Hearing Tests
2.
Cogn Res Princ Implic ; 9(1): 4, 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38191869

ABSTRACT

Localizing sounds in noisy environments can be challenging. Here, we reproduced real-life soundscapes to investigate the effects of environmental noise on the sound localization experience. We evaluated participants' performance and metacognitive assessments, including measures of sound localization effort and confidence, while also tracking their spontaneous head movements. Normal-hearing participants (N = 30) performed a speech-localization task in three common soundscapes of progressively increasing complexity: nature, traffic, and a cocktail-party setting. To control visual information and measure behavior, we used visual virtual reality technology. The results revealed that soundscape complexity affected both performance errors and metacognitive evaluations: participants reported increased effort and reduced confidence when localizing sounds in more complex noise environments. By contrast, soundscape complexity did not influence the use of spontaneous exploratory head-related behaviors. We also observed that, irrespective of the noise condition, participants who made more head rotations and explored a wider extent of space by rotating their heads made smaller localization errors. Interestingly, we found preliminary evidence that an increase in spontaneous head movements, specifically in the extent of head rotation, led to decreased perceived effort and increased confidence at the single-trial level. These findings expand previous observations on sound localization in noisy environments by broadening the perspective to include metacognitive evaluations, exploratory behaviors, and their interactions.
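The head-behavior analysis summarized above, where participants who rotated their heads more made smaller localization errors, amounts to correlating an exploration measure with an error measure across participants. A minimal stdlib-only sketch; the data values below are hypothetical, not the study's:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant values: total head-rotation extent (deg)
# explored while listening, and mean absolute localization error (deg).
rotation_extent = [40.0, 55.0, 70.0, 90.0, 120.0]
localization_error = [14.0, 12.5, 10.0, 8.0, 6.5]

r = pearson(rotation_extent, localization_error)
print(f"r = {r:.2f}")  # negative r: wider exploration, smaller error
```

A negative coefficient over such data captures the reported pattern (more exploration, lower error); the single-trial analysis in the study would additionally model effort and confidence, which this sketch omits.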


Subjects
Head Movements, Sound Localization, Humans, Sound, Exploratory Behavior, Mental Processes
3.
Article in English | MEDLINE | ID: mdl-37971362

ABSTRACT

Metacognition entails knowledge of one's own cognitive skills, perceived self-efficacy and locus of control when performing a task, and performance monitoring. Age-related changes in metacognition have been observed in metamemory, whereas their occurrence in hearing has remained unknown. We tested 30 older and 30 younger adults with typical hearing to assess whether age reduces metacognition for hearing sentences in noise. Metacognitive monitoring was overall comparable between older and younger adults. In fact, the older group achieved better monitoring for words in the second part of the sentence. Additionally, only older adults showed a correlation between performance and perceived confidence. No age differences were found for locus of control, knowledge, or self-efficacy. This suggests intact metacognitive skills for hearing in noise in older adults, alongside a somewhat paradoxical overconfidence in younger adults. These findings support exploiting metacognition to help older adults deal with noisy environments, since metacognition is central to implementing self-regulation strategies.

4.
Trends Hear ; 27: 23312165231182289, 2023.
Article in English | MEDLINE | ID: mdl-37611181

ABSTRACT

Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues (resulting from cochlear implants, CI, or unilateral hearing loss, uHL) still allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention by sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (N = 20), unilateral CI users (N = 20), and individuals with uHL (N = 20). For comparison, we also included a group of normal-hearing (NH, N = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH participants listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual orienting abilities. These novel results show that audio-visual attentional orienting can be preserved in bilateral CI users and uHL patients to a greater extent than in unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone, to capture the extent to which it may enable or impede typical interactions with the multisensory environment.
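The attention-capture paradigm used here is conventionally quantified as a validity effect: responses to visual targets are faster when the preceding lateralized sound occurs on the same side (valid) than on the opposite side (invalid). A minimal sketch with hypothetical reaction times, not the study's data:

```python
from statistics import mean

# Hypothetical trials: side of the sound cue, side of the visual target,
# and response time in milliseconds.
trials = [
    {"cue": "left",  "target": "left",  "rt_ms": 412},
    {"cue": "left",  "target": "right", "rt_ms": 451},
    {"cue": "right", "target": "right", "rt_ms": 405},
    {"cue": "right", "target": "left",  "rt_ms": 447},
]

valid = [t["rt_ms"] for t in trials if t["cue"] == t["target"]]
invalid = [t["rt_ms"] for t in trials if t["cue"] != t["target"]]

# A positive validity effect indexes intact audio-visual orienting;
# an effect near zero corresponds to the "absent orienting" pattern.
validity_effect = mean(invalid) - mean(valid)
print(f"validity effect: {validity_effect:.1f} ms")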


Subjects
Cochlear Implantation, Cochlear Implants, Deafness, Hearing Loss, Unilateral, Hearing Loss, Sound Localization, Speech Perception, Adult, Humans, Cues, Hearing, Cochlear Implantation/methods
5.
J Clin Med ; 12(6)2023 Mar 17.
Article in English | MEDLINE | ID: mdl-36983357

ABSTRACT

Unilateral hearing loss (UHL) alters binaural cues, resulting in a significant increase in spatial errors in the horizontal plane. In this study, nineteen patients with UHL were recruited and randomized in a cross-over design into two groups: a first group (n = 9) received a spatial audiovisual training in the first session and a non-spatial audiovisual training in the second session (2 to 4 weeks after the first); a second group (n = 10) received the same trainings in the opposite order (non-spatial, then spatial). A sound localization test using head-pointing (LOCATEST) was completed before and after each training session. The results showed a significant decrease in head-pointing localization errors after spatial training for group 1 (24.85° ± 15.8° vs. 16.17° ± 11.28°; p < 0.001). The number of head movements during the spatial training did not change across the 19 participants (p = 0.79); nonetheless, hand-pointing errors and reaction times significantly decreased by the end of the spatial training (p < 0.001). This study suggests that audiovisual spatial training can improve spatial hearing and induce adaptation to a monaural deficit through the optimization of effective head movements. Virtual reality systems are relevant tools that can be used in clinics to develop training programs for patients with hearing impairments.
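Head-pointing localization errors such as those reported above (e.g., 24.85° before vs. 16.17° after training) are mean unsigned angular differences between the pointed and the true azimuth. A sketch with hypothetical pre/post responses, assuming angular differences are wrapped to the range from -180° to 180°:

```python
from statistics import mean

def azimuth_error(response_deg, target_deg):
    # Signed angular difference wrapped to [-180, 180); its absolute value
    # is the unsigned localization error used in head-pointing tests.
    d = (response_deg - target_deg + 180.0) % 360.0 - 180.0
    return abs(d)

# Hypothetical head-pointing responses for five azimuth targets,
# before and after a training session.
targets = [-60, -30, 0, 30, 60]
pre_responses = [-30, -12, 5, 48, 95]
post_responses = [-48, -22, 2, 38, 75]

pre_err = mean(azimuth_error(r, t) for r, t in zip(pre_responses, targets))
post_err = mean(azimuth_error(r, t) for r, t in zip(post_responses, targets))
print(f"pre: {pre_err:.1f} deg, post: {post_err:.1f} deg")
```

The wrap step matters for rear targets, where a raw subtraction would overstate the error (e.g., responding at 170° to a target at -170° is a 20° error, not 340°).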

6.
Eur Arch Otorhinolaryngol ; 280(8): 3661-3672, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36905419

ABSTRACT

BACKGROUND AND PURPOSE: Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills. Evidence that training these abilities in UCI users is possible remains limited. In this study, we assessed whether a Spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users.
METHODS: Using a crossover randomized clinical trial, we compared the effects of a Spatial training protocol with those of a Non-Spatial control training. We tested 17 UCI users in a head-pointing to sound task and in an audio-visual attention orienting task, before and after each training. The study is registered on clinicaltrials.gov (NCT04183348). RESULTS: During the Spatial VR training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before vs. after training, localization errors decreased more after the Spatial than after the control training. No training effects emerged in the audio-visual attention orienting task. CONCLUSIONS: Our results show that sound localization in UCI users improves during Spatial training, with benefits that extend to a non-trained sound localization task (generalization). These findings have potential for novel rehabilitation procedures in clinical contexts.


Subjects
Cochlear Implantation, Cochlear Implants, Sound Localization, Speech Perception, Humans, Hearing, Cochlear Implantation/methods, Hearing Tests/methods
7.
Conscious Cogn ; 109: 103490, 2023 03.
Article in English | MEDLINE | ID: mdl-36842317

ABSTRACT

In spoken languages, face masks are an obstacle to speech understanding and influence metacognitive judgments, reducing confidence and increasing effort while listening. To date, all studies on face masks and communication have involved spoken languages and hearing participants, leaving no insight into how masked communication affects non-spoken languages. Here, we examined the effects of face masks on sign language comprehension and metacognition. In an online experiment, deaf participants (N = 60) watched three parts of a story signed without a mask, with a transparent mask, or with an opaque mask, and answered questions about story content, as well as about their perceived effort, feeling of understanding, and confidence in their answers. Results showed that feeling of understanding and perceived effort worsened as the visual condition changed from no mask to transparent or opaque masks, while comprehension of the story did not differ significantly across visual conditions. We propose that these metacognitive effects are due to the reduction of pragmatic, linguistic, and para-linguistic cues from the lower face, hidden by the mask. This reduction could affect the perception of lower-face linguistic components, attitude attribution, classification of emotions, and the prosody of a conversation, driving the observed effects on metacognitive judgments while leaving sign language comprehension substantially unchanged, albeit at a higher effort. These results are a novel step toward understanding what drives the metacognitive effects of face masks in face-to-face communication, and they highlight the importance of including the metacognitive dimension in human communication research.


Subjects
Metacognition, Humans, Comprehension, Masks, Speech, Auditory Perception
8.
Ear Hear ; 44(1): 189-198, 2023.
Article in English | MEDLINE | ID: mdl-35982520

ABSTRACT

OBJECTIVES: We assessed if spatial hearing training improves sound localization in bilateral cochlear implant (BCI) users and whether its benefits can generalize to untrained sound localization tasks. DESIGN: In 20 BCI users, we assessed the effects of two training procedures (spatial versus nonspatial control training) on two different tasks performed before and after training (head-pointing to sound and audiovisual attention orienting). In the spatial training, participants identified sound position by reaching toward the sound sources with their hand. In the nonspatial training, comparable reaching movements served to identify sound amplitude modulations. A crossover randomized design allowed comparison of training procedures within the same participants. Spontaneous head movements while listening to the sounds were allowed and tracked to correlate them with localization performance. RESULTS: During spatial training, BCI users reduced their sound localization errors in azimuth and adapted their spontaneous head movements as a function of sound eccentricity. These effects generalized to the head-pointing sound localization task, as revealed by greater reduction of sound localization error in azimuth and more accurate first head-orienting response, as compared to the control nonspatial training. BCI users benefited from auditory spatial cues for orienting visual attention, but the spatial training did not enhance this multisensory attention ability. CONCLUSIONS: Sound localization in BCI users improves with spatial reaching-to-sound training, with benefits to a nontrained sound localization task. These findings pave the way to novel rehabilitation procedures in clinical contexts.


Subjects
Cochlear Implantation, Cochlear Implants, Sound Localization, Humans, Auditory Perception/physiology, Cochlear Implantation/methods, Hearing/physiology, Hearing Tests/methods, Sound Localization/physiology, Cross-Over Studies
9.
Sci Rep ; 12(1): 19036, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36351944

ABSTRACT

It is evident that the brain is capable of large-scale reorganization following sensory deprivation, but the extent of such reorganization is, to date, not clear. The auditory modality is the most accurate for representing temporal information, and deafness is an ideal clinical condition for studying the reorganization of temporal representation when the audio signal is not available. Here we show that hearing individuals, but not deaf individuals, exhibit a strong ERP response to visual stimuli in temporal areas during a time-bisection task. This ERP response appears 50-90 ms after the flash and recalls some aspects of the N1 ERP component usually elicited by auditory stimuli. The same ERP is not evident for a visual space-bisection task, suggesting that the early recruitment of temporal cortex is specific to building a highly resolved temporal representation within the visual modality. These findings provide evidence that the lack of auditory input can interfere with the typical development of complex visual temporal representations.


Subjects
Auditory Cortex, Deafness, Humans, Photic Stimulation, Magnetic Resonance Imaging, Hearing, Brain Mapping, Auditory Cortex/physiology
10.
Front Hum Neurosci ; 16: 1026056, 2022.
Article in English | MEDLINE | ID: mdl-36310849

ABSTRACT

Moving the head while a sound is playing improves its localization in human listeners, in children and adults, with or without hearing problems. It remains to be ascertained whether this benefit also extends to aging adults with hearing loss, a population in which spatial hearing difficulties are often documented and intervention solutions are scant. Here we examined the performance of elderly adults (61-82 years old) with symmetrical or asymmetrical age-related hearing loss while they localized sounds with their head fixed or free to move. Using motion tracking in combination with free-field sound delivery in visual virtual reality, we tested participants on two auditory spatial tasks: front-back discrimination and 3D sound localization in front space. Front-back discrimination was easier for participants with symmetrical compared to asymmetrical hearing loss, yet both groups reduced their front-back errors when head movements were allowed. In 3D sound localization, free head movements reduced errors in the horizontal dimension and in a composite measure of error in 3D space. Errors in 3D space improved for participants with asymmetrical hearing impairment when the head was free to move. These preliminary findings extend the literature on the advantage of head movements for sound localization to aging adults with hearing loss, and suggest that the disparity of auditory cues at the two ears can modulate this benefit. They point to the possibility of leveraging self-regulation strategies and active behavior when promoting spatial hearing skills.

11.
PLoS One ; 17(4): e0263509, 2022.
Article in English | MEDLINE | ID: mdl-35421095

ABSTRACT

Localizing sounds means being able to process auditory cues deriving from the interplay among sound waves, the head, and the ears. When auditory cues change because of temporary or permanent hearing loss, sound localization becomes difficult and uncertain. The brain can adapt to altered auditory cues throughout life, and multisensory training can promote the relearning of spatial hearing skills. Here, we studied the training potential of sound-oriented motor behavior, testing whether a training based on manual actions toward sounds can produce learning effects that generalize to different auditory spatial tasks. We assessed spatial hearing relearning in normal-hearing adults with a plugged ear, using visual virtual reality and body motion tracking. Participants performed two auditory tasks that entail explicit and implicit processing of sound position (head-pointing sound localization and audio-visual attention cueing, respectively), before and after receiving a spatial training session in which they identified sound position by reaching to nearby auditory sources. Using a crossover design, the effects of this spatial training were compared to a control condition involving the same physical stimuli but different task demands (a non-spatial discrimination of amplitude modulations in the sound). Spatial hearing in one-ear-plugged participants improved more after the reaching-to-sound training than in the control condition. Training by reaching also modified head-movement behavior during listening. Crucially, the improvements observed during training generalized to a different sound localization task, possibly as a consequence of newly acquired head-movement strategies.


Subjects
Cues, Sound Localization, Acoustic Stimulation, Adaptation, Physiological, Adult, Auditory Perception, Cross-Over Studies, Hearing, Humans
12.
Exp Brain Res ; 240(3): 813-824, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35048159

ABSTRACT

In noisy contexts, sound discrimination improves when the auditory sources are separated in space. This phenomenon, named Spatial Release from Masking (SRM), arises from the interaction between the auditory information reaching the ear and spatial attention resources. To examine the relative contribution of these two factors, we exploited an audio-visual illusion in a hearing-in-noise task to create conditions in which the initial stimulation to the ears is held constant, while the perceived separation between speech and masker is changed illusorily (visual capture of sound). In two experiments, we asked participants to identify a string of five digits pronounced by a female voice, embedded in either energetic (Experiment 1) or informational (Experiment 2) noise, before reporting the perceived location of the heard digits. Critically, the distance between target digits and masking noise was manipulated both physically (from 22.5 to 75.0 degrees) and illusorily, by pairing target sounds with visual stimuli either at same (audio-visual congruent) or different positions (15 degrees offset, leftward or rightward: audio-visual incongruent). The proportion of correctly reported digits increased with the physical separation between the target and masker, as expected from SRM. However, despite effective visual capture of sounds, performance was not modulated by illusory changes of target sound position. Our results are compatible with a limited role of central factors in the SRM phenomenon, at least in our experimental setting. Moreover, they add to the controversial literature on the limited effects of audio-visual capture in auditory stream separation.
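Spatial release from masking, as measured here, is conventionally summarized as the proportion of correctly reported targets at each target-masker separation. A minimal sketch over hypothetical trial data, not the study's:

```python
from statistics import mean

# Hypothetical hearing-in-noise trials: (target-masker separation in deg,
# 1 if the five-digit string was reported correctly, else 0).
trials = [(22.5, 0), (22.5, 1), (45.0, 1), (45.0, 1), (75.0, 1), (75.0, 1)]

by_separation = {}
for sep, correct in trials:
    by_separation.setdefault(sep, []).append(correct)

# SRM predicts accuracy rising with physical spatial separation.
for sep in sorted(by_separation):
    print(f"{sep:5.1f} deg: {mean(by_separation[sep]):.2f} correct")
```

The study's key manipulation would add a second grouping factor, the audio-visual (in)congruency of the illusory displacement, to test whether perceived separation modulates the same accuracy measure.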


Subjects
Perceptual Masking, Speech Perception, Acoustic Stimulation, Female, Hearing, Humans, Noise, Speech
13.
Int J Audiol ; 61(7): 561-573, 2022 07.
Article in English | MEDLINE | ID: mdl-34634214

ABSTRACT

OBJECTIVE: The aim of this study was to assess to what extent simultaneously obtained measures of listening effort (task-evoked pupil dilation, verbal response time [RT], and self-rating) are sensitive to auditory and cognitive manipulations in a speech perception task. The study also aimed to explore the possible relationship between RT and pupil dilation. DESIGN: A within-group design was adopted. All participants were administered the Matrix Sentence Test in 12 conditions (signal-to-noise ratios [SNR] of -3, -6, -9 dB; attentional resources focussed vs. divided; spatial priors present vs. absent). STUDY SAMPLE: Twenty-four normal-hearing adults, 20-41 years old (M = 23.5), were recruited. RESULTS: A significant effect of SNR was found for all measures; however, pupil dilation discriminated only partially between SNRs. Neither of the cognitive manipulations was effective in modulating the measures. No relationship emerged between pupil dilation, RT, and self-ratings. CONCLUSIONS: RT, pupil dilation, and self-ratings can be obtained simultaneously when administering speech perception tasks, even though some limitations remain, related to the absence of a retention period after the listening phase. The three measures differ in their sensitivity to changes in the auditory environment, with RTs and self-ratings proving most sensitive to changes in SNR.
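Task-evoked pupil dilation, one of the three effort measures above, is conventionally computed by subtracting a pre-stimulus baseline from the pupil trace and taking the peak during the task window. A minimal sketch on hypothetical samples; the trace values and window lengths are invented, not the study's:

```python
from statistics import mean

def task_evoked_dilation(trace_mm, baseline_n):
    # Baseline-correct a pupil trace: subtract the mean of the pre-stimulus
    # samples, then take the peak of the remaining (task-window) samples.
    baseline = mean(trace_mm[:baseline_n])
    return max(s - baseline for s in trace_mm[baseline_n:])

# Hypothetical pupil-diameter samples (mm): 3 baseline samples, then the
# listening phase of one trial.
trace = [3.10, 3.12, 3.08, 3.20, 3.35, 3.42, 3.30]
print(f"peak dilation: {task_evoked_dilation(trace, 3):.2f} mm")
```

A larger peak relative to baseline is read as higher listening effort; in practice the trace would first be cleaned of blinks and averaged over trials per condition.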


Subjects
Pupil, Speech Perception, Adult, Auditory Perception, Humans, Listening Effort, Pupil/physiology, Reaction Time, Speech Perception/physiology, Young Adult
14.
Iperception ; 12(2): 2041669521998393, 2021.
Article in English | MEDLINE | ID: mdl-35145616

ABSTRACT

Interactions with talkers wearing face masks have become part of our daily routine since the beginning of the COVID-19 pandemic. Using an on-line experiment resembling a video conference, we examined the impact of face masks on speech comprehension. Typical-hearing listeners performed a speech-in-noise task while seeing talkers with visible lips, talkers wearing a surgical mask, or just the name of the talker displayed on screen. The target voice was masked by concurrent distracting talkers. We measured performance, confidence and listening effort scores, as well as meta-cognitive monitoring (the ability to adapt self-judgments to actual performance). Hiding the talkers behind a screen or concealing their lips via a face mask led to lower performance, lower confidence scores, and increased perceived effort. Moreover, meta-cognitive monitoring was worse when listening in these conditions compared with listening to an unmasked talker. These findings have implications on everyday communication for typical-hearing individuals and for hearing-impaired populations.

15.
Neuropsychologia ; 149: 107665, 2020 12.
Article in English | MEDLINE | ID: mdl-33130161

ABSTRACT

When localizing sounds in space, the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, initial head position, and coordinates in external space. These models can be updated throughout life, setting the basis for relearning spatial hearing abilities in adulthood. In addition, strategic behavioral adjustments allow people to adapt quickly to atypical listening situations. Until recently, the potential role of dynamic listening, involving head movements or reaching to sounds, has remained largely overlooked. Here, we exploited visual virtual reality (VR) and real-time kinematic tracking to study the role of active multisensory-motor interactions as hearing individuals adapt to altered binaural cues (one ear plugged and muffed). Participants were immersed in a VR scenario showing 17 virtual speakers at ear level. In each trial, they heard a sound delivered from a real speaker aligned with one of the virtual ones and were instructed either to reach and touch the perceived sound source (Reaching group) or to read the label associated with the speaker (Naming group). Participants were free to move their heads during the task and received audio-visual feedback on their performance. Most importantly, they performed the task under binaural or monaural listening. Both groups adapted rapidly to monaural listening, improving sound localization performance across trials and changing their head-movement behavior. Reaching to the sounds induced faster and larger sound localization improvements compared to just naming their positions. This benefit was linked to progressively wider head movements to explore auditory space, selectively in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for adapting to altered binaural listening. Head movements played an important role in adaptation, pointing to the importance of dynamic listening when implementing training protocols for improving spatial hearing.


Subjects
Sound Localization, Virtual Reality, Adaptation, Physiological, Adult, Cues, Hearing, Humans
16.
Cognition ; 204: 104409, 2020 11.
Article in English | MEDLINE | ID: mdl-32717425

ABSTRACT

Spatial hearing relies on a series of mechanisms for associating auditory cues with positions in space. When auditory cues are altered, humans, like other animals, can update the way they exploit auditory cues and partially compensate for their spatial hearing difficulties. In two experiments, we simulated monaural listening in hearing adults by temporarily plugging and muffing one ear, to assess the effects of active versus passive training conditions. During active training, participants moved an audio-bracelet attached to their wrist while continuously attending to the position of the sounds it produced. During passive training, participants received identical acoustic stimulation and performed exactly the same task, but the audio-bracelet was moved by the experimenter. Before and after training, we measured adaptation to monaural listening in three auditory tasks: single sound localization, minimum audible angle (MAA), and spatial and temporal bisection. We also administered the tests twice to an untrained group, which completed the same auditory tasks but received no training. Participants significantly improved in single sound localization across 3 consecutive days, and more so in the active than in the passive training group. This reveals that the benefits of kinesthetic cues are additive with respect to those of attending to the position of sounds and/or seeing their positions when updating spatial hearing. The observed adaptation did not generalize to the other auditory spatial tasks (space bisection and MAA), suggesting that partial updating of sound-space correspondences does not extend to all aspects of spatial hearing.


Subjects
Cues, Sound Localization, Acoustic Stimulation, Adult, Animals, Auditory Perception, Hearing, Humans
17.
Acta Psychol (Amst) ; 191: 261-270, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30352360

ABSTRACT

In our study, we aimed to reduce bodily self-consciousness using a multisensory illusion (MI) and tested whether this manipulation increases self-objectification (the psychological attitude of perceiving one's own body as an object). Participants observed their own body from a first-person perspective, through a head-mounted display, while receiving incongruent (or congruent) visuo-tactile stimulation on their abdomen or arms. Results showed stronger feelings of disownership, loss of agency, and the sensation of being out of one's own body during incongruent compared to congruent stimulation. This reduced bodily self-consciousness did not affect self-objectification. However, self-objectification (as measured by the appearance control beliefs subscale of the Objectified Body Consciousness questionnaire) was positively correlated with MI strength. Moreover, we investigated the impact of the MI and self-objectification on body size estimation. We found systematic body size underestimation, irrespective of the type of stimulation or the tendency toward self-objectification. These results document a simple yet effective approach to altering bodily self-consciousness, which nonetheless spares self-objectification and body-size perception.


Subjects
Body Image, Emotions/physiology, Illusions/physiology, Self Concept, Touch/physiology, Visual Perception/physiology, Adult, Body Image/psychology, Body Size/physiology, Consciousness/physiology, Female, Humans, Illusions/psychology, Photic Stimulation/methods, Size Perception/physiology, Surveys and Questionnaires, Young Adult